
    A Multi-level Approach for Identifying Process Change in Cancer Pathways

    An understudied challenge within process mining is the area of process change over time. This is a particular concern in healthcare, where patterns of care emerge and evolve in response to individual patient needs and through complex interactions between people, process, technology and changing organisational structure. We propose a structured approach to analyse process change over time suitable for the complex domain of healthcare. Our approach applies a qualitative process comparison at three levels of abstraction: a holistic perspective summarizing patient pathways (process model level), a middle-level perspective based on activity sequences for individuals (trace level), and a fine-grained focus on activities (activity level). Our aim is to identify points in time where a process changed (detection), to localise and characterise the change (localisation and characterisation), and to understand process evolution (unravelling). We illustrate the approach using a case study of cancer pathways in Leeds Cancer Centre, where we found evidence of agreement in process change identified at the process model and activity levels, but not at the trace level. In the experiment we show that this qualitative approach provides a useful understanding of process change over time. Examining change at the three levels provides confirmatory evidence of process change where perspectives agree, while contradictory evidence can lead to focused discussions with domain experts. The approach should be of interest to others dealing with processes that undergo complex change over time.
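The paper's approach is qualitative, but the activity-level comparison it describes can be illustrated with a small sketch. Assuming events are reduced to hypothetical (activity, month) pairs, comparing relative activity frequencies before and after a candidate change point localises which activities shifted:

```python
from collections import Counter

def activity_profile(events, lo, hi):
    """Count activity occurrences for events whose timestamp falls in [lo, hi)."""
    return Counter(a for a, t in events if lo <= t < hi)

def profile_shift(events, split):
    """Per-activity change in relative frequency before vs. after `split`;
    large absolute differences point at where the process changed."""
    before = activity_profile(events, float("-inf"), split)
    after = activity_profile(events, split, float("inf"))
    n_b = sum(before.values()) or 1
    n_a = sum(after.values()) or 1
    return {a: after[a] / n_a - before[a] / n_b for a in set(before) | set(after)}

# Toy log: after month 6, 'MRI' replaces 'CT' in the pathway.
log = [("CT", m) for m in range(1, 7)] + [("MRI", m) for m in range(7, 13)]
shift = profile_shift(log, 7)  # {'CT': -1.0, 'MRI': 1.0}
```

This is only the detection/localisation step on one level; the paper's method triangulates such signals across all three abstraction levels with domain experts.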

    Anti-alignments in conformance checking: the dark side of process models

    Conformance checking techniques assess the suitability of a process model in representing an underlying process, observed through a collection of real executions. These techniques suffer from the well-known state-space explosion problem; hence handling process models exhibiting large or even infinite state spaces remains a challenge. One important metric in conformance checking is to assess the precision of the model with respect to the observed executions, i.e., to characterize the ability of the model to produce behavior unrelated to the one observed. By avoiding the computation of the full state space of a model, current techniques only provide estimations of the precision metric, which in some situations tend to be very optimistic, thus hiding real problems a process model may have. In this paper we present the notion of anti-alignment as a concept to help unveil traces in the model that may deviate significantly from the observed behavior. Using anti-alignments, current estimations can be improved, e.g., in precision checking. We show how to express the problem of finding anti-alignments as the satisfiability of a Boolean formula, and provide a tool which can deal with large models efficiently.
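The paper encodes anti-alignment search as a SAT problem; as a much simpler illustration of the concept itself, assuming the model's language is given as an explicit finite set of traces, an anti-alignment is the model trace farthest (here in edit distance) from every observed trace:

```python
def edit_distance(s, t):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(t) + 1))
    for i, a in enumerate(s, 1):
        cur = [i]
        for j, b in enumerate(t, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (a != b)))
        prev = cur
    return prev[-1]

def anti_alignment(model_traces, log_traces):
    """Model trace whose distance to its *closest* observed trace is maximal,
    i.e. modelled behaviour least supported by the log."""
    return max(model_traces,
               key=lambda m: min(edit_distance(m, l) for l in log_traces))

# 'axyz' is allowed by the model but far from anything observed.
worst = anti_alignment(["abc", "axyz"], ["abc", "abd"])  # 'axyz'
```

Brute-force enumeration is exactly what does not scale to large or infinite state spaces, which is why the paper resorts to a Boolean-formula encoding instead.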

    Unfolding-Based Process Discovery

    This paper presents a novel technique for process discovery. In contrast to the current trend, which only considers an event log for discovering a process model, we assume two additional inputs: an independence relation on the set of logged activities, and a collection of negative traces. After deriving an intermediate net unfolding from them, we perform a controlled folding giving rise to a Petri net which contains both the input log and all independence-equivalent traces arising from it. Remarkably, the derived Petri net cannot execute any trace from the negative collection. The entire chain of transformations is fully automated. A tool has been developed and experimental results are provided that witness the significance of the contribution of this paper. Comment: This is the unabridged version of a paper with the same title that appeared in the proceedings of ATVA 201

    Discovering duplicate tasks in transition systems for the simplification of process models

    This work presents a set of methods to improve the understandability of process models. Traditionally, simplification methods trade off quality metrics, such as fitness or precision. Conversely, the methods proposed in this paper produce simplified models while preserving or even increasing fidelity metrics. The first problem addressed in the paper is the discovery of duplicate tasks. A new method is proposed that avoids overfitting by working on the transition system generated by the log. The method is able to discover duplicate tasks even in the presence of concurrency and choice. The second problem is the structural simplification of the model by identifying optional and repetitive tasks. The tasks are substituted by annotated events that allow the removal of silent tasks and reduce the complexity of the model. An important feature of the methods proposed in this paper is that they are independent of the actual miner used for process discovery.
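To make the setting concrete, here is an illustrative sketch (not the paper's algorithm) of the two ingredients involved: a transition system abstracted from a log, and a naive signal for duplicate-task candidates, namely a label occurring on edges leaving different states:

```python
from collections import defaultdict

def transition_system(traces, k=1):
    """Edges (state, activity, next_state) where a state is the last k
    activities seen; a common log abstraction."""
    edges = set()
    for trace in traces:
        state = ()
        for act in trace:
            nxt = (state + (act,))[-k:]
            edges.add((state, act, nxt))
            state = nxt
    return edges

def duplicate_candidates(edges):
    """Labels fired from more than one distinct state: candidates for
    splitting into duplicate tasks."""
    sources = defaultdict(set)
    for src, act, _ in edges:
        sources[act].add(src)
    return {a for a, s in sources.items() if len(s) > 1}

# 'b' follows both 'a' and 'c', so it may play two distinct roles.
edges = transition_system([("a", "b"), ("c", "b")])
duplicate_candidates(edges)  # {'b'}
```

The actual method is considerably more careful, controlling overfitting and handling concurrency and choice; this sketch only shows why the transition-system view exposes such duplicates at all.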

    Towards an Entropy-based Analysis of Log Variability

    Rules, decisions, and workflows are intertwined components depicting the overall process. So far, imperative workflow modelling languages have played the major role in the description and analysis of business processes. Despite their undoubted efficacy in representing sequential executions, they hide circumstantial information leading to the enactment of activities, and obscure the rationale behind the verification of requirements, dependencies, and goals. This workshop aimed at providing a platform for the discussion and introduction of new ideas related to the development of a holistic approach that encompasses all those aspects. The objective was to extend the reach of the business process management audience towards the decisions and rules community and to increase the integration between different imperative, declarative and hybrid modelling perspectives. Of the high-quality submitted manuscripts, three papers were accepted for publication, an acceptance rate of 50%. They contributed to fostering a fruitful discussion among the participants about the respective impact and interplay of the decision perspective and the process perspective.

    Discovery of frequent episodes in event logs

    The lion's share of process mining research focuses on the discovery of end-to-end process models describing the characteristic behavior of observed cases. The notion of a process instance (i.e., the case) plays an important role in process mining. Pattern mining techniques (such as frequent itemset mining, association rule learning, sequence mining, and traditional episode mining) do not consider process instances. An episode is a collection of partially ordered events. In this paper, we present a new technique (and corresponding implementation) that discovers frequently occurring episodes in event logs, thereby exploiting the fact that events are associated with cases. Hence, the work can be positioned in between process mining and pattern mining. Episode discovery has its applications in, amongst others, discovering local patterns in complex processes and conformance checking based on partial orders. We also discover episode rules to predict behavior and discover correlated behaviors in processes. We have developed a ProM plug-in that exploits efficient algorithms for the discovery of frequent episodes and episode rules. Experimental results based on real-life event logs demonstrate the feasibility and usefulness of the approach.
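The case-aware twist can be shown with a minimal sketch, assuming each case is a tuple of activities: support of an ordered pair (a before b) is the number of *cases* containing it, not a raw occurrence count (size-two episodes only; the paper handles general partial orders):

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(cases, min_support):
    """Ordered activity pairs (a occurs before b within a case) supported by
    at least `min_support` cases."""
    support = Counter()
    for trace in cases:
        # Deduplicate within a case so each case contributes at most 1.
        pairs = {(trace[i], trace[j])
                 for i, j in combinations(range(len(trace)), 2)}
        support.update(pairs)
    return {p for p, c in support.items() if c >= min_support}

cases = [("a", "b", "c"), ("a", "c"), ("a", "b")]
frequent_pairs(cases, 2)  # {('a', 'b'), ('a', 'c')}
```

Extending the same counting scheme from pairs to partially ordered episode graphs is where the efficient algorithms in the ProM plug-in come in.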

    Mining process performance from event logs

    In systems where process executions are not strictly enforced by a predefined process model, obtaining reliable performance information is not trivial. In this paper, we analyze an event log of a real-life process, taken from a Dutch financial institute, using process mining techniques. In particular, we exploit the alignment technique [2] to gain insights into the control flow and performance of the process execution. We show that alignments between event logs and process models obtained from discovery algorithms reveal frequently occurring deviations, and that such insights can be exploited to repair the original process models to better reflect reality. Furthermore, we show that the alignments can be further exploited to obtain performance information. All analysis in this paper is performed using plug-ins within the open-source process mining toolkit ProM.
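Once events are aligned to model activities, performance extraction reduces to aggregating timestamp differences. A minimal sketch (not the ProM implementation), assuming each case is a list of hypothetical (activity, timestamp) pairs already in execution order:

```python
from collections import defaultdict

def mean_waiting_times(cases):
    """Mean elapsed time between each pair of consecutive activities,
    a simple performance view over a timestamped log."""
    waits = defaultdict(list)
    for trace in cases:
        for (a1, t1), (a2, t2) in zip(trace, trace[1:]):
            waits[(a1, a2)].append(t2 - t1)
    return {pair: sum(v) / len(v) for pair, v in waits.items()}

cases = [[("a", 0), ("b", 4)],
         [("a", 0), ("b", 6)]]
mean_waiting_times(cases)  # {('a', 'b'): 5.0}
```

Aligning first is what makes such numbers reliable when executions deviate from the model: deviating events are mapped (or flagged) before durations are aggregated.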

    ProDiGy: Human-in-the-loop process discovery

    Process mining is a discipline that combines the two worlds of business process management and data mining. The central component of process mining is a graphical process model that provides an intuitive way of capturing the logical flow of a process. Traditionally, these process models are either modeled by a user relying on domain expertise only, or discovered automatically by relying entirely on event data. In an attempt to address this apparent gap between user-driven and data-driven process discovery, we present ProDiGy, an alternative approach that enables interactive process discovery by allowing the user to actively steer process discovery. ProDiGy provides the user with automatic recommendations to edit a process model, and quantifies and visualizes the impact of each recommendation. We evaluated ProDiGy (i) objectively, by comparing it with automated discovery approaches, and (ii) subjectively, by performing a user study with healthcare researchers. Our results show that ProDiGy enables the inclusion of domain knowledge in process discovery, which leads to an improvement of the results over traditional process discovery techniques. Furthermore, we found that ProDiGy also increases the comprehensibility of a process model by providing the user with more control over the discovery of the process model.